6 research outputs found

    Visual on-line learning in distributed camera networks

    Automatic detection of persons is an important application in visual surveillance. In general, state-of-the-art systems have two main disadvantages: first, a general detector usually has to be learned that is applicable to a wide range of scenes, so training is time-consuming and requires a huge amount of labeled data; second, the data is usually processed centrally, which leads to heavy network traffic. The goal of this paper is to overcome these problems with a person detection system based on distributed smart cameras (DSCs). Assuming a large number of cameras with partly overlapping views, the main idea is to reduce the model complexity of the detector by training a specific detector for each camera. These detectors are initialized by a pre-trained classifier, which is then adapted to a specific camera by co-training. In particular, for co-training we apply an on-line learning method (i.e., boosting for feature selection), where the information exchange is realized by mapping the overlapping views onto each other using a homography. This yields a compact scene-dependent representation, which allows the classifiers to be trained and evaluated on an embedded device. Moreover, since the information transfer is reduced to exchanging positions, the required network traffic is minimal. The power of the approach is demonstrated in various experiments on different publicly available data sets. In fact, we show that on-line learning and DSCs can benefit from each other. Index Terms — visual on-line learning, object detection, multi-camera networks
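The position exchange described in the abstract relies on a planar homography between overlapping camera views. As a minimal sketch (the function name and example matrix below are illustrative, not taken from the paper), mapping a detected position from one camera's image plane to another's amounts to a projective transform in homogeneous coordinates:

```python
def apply_homography(H, x, y):
    # Map pixel (x, y) from camera A's image plane to camera B's,
    # using a 3x3 homography H (row-major nested lists) and
    # homogeneous coordinates: [x', y', w] = H * [x, y, 1].
    xh = H[0][0] * x + H[0][1] * y + H[0][2]
    yh = H[1][0] * x + H[1][1] * y + H[1][2]
    w  = H[2][0] * x + H[2][1] * y + H[2][2]
    return xh / w, yh / w  # dehomogenize

# Toy homography: pure translation by (10, 5) between the two views.
H = [[1, 0, 10],
     [0, 1, 5],
     [0, 0, 1]]
print(apply_homography(H, 100, 200))  # -> (110.0, 205.0)
```

Because only such (x, y) detection positions cross the network, rather than image data, the per-detection payload is a few bytes, which is what keeps the traffic minimal.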

    Audio-Visual Co-Training for Vehicle Classification

    No full text
    In this paper, we introduce a fully autonomous vehicle classification system that continuously learns from large amounts of unlabeled data. For that purpose, we propose a novel on-line co-training method based on visual and acoustic information. Our system does not need complicated microphone arrays or video calibration and automatically adapts to specific traffic scenes. These specialized detectors are more accurate and more compact than general classifiers, which allows for lightweight use in low-cost and portable embedded systems. Hence, we implemented our system on an off-the-shelf embedded platform. In the experimental part, we show that the proposed method accomplishes the desired task and outperforms single-cue systems. Furthermore, our co-training framework minimizes the labeling effort without degrading overall system performance.
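The core idea of co-training over two cues can be sketched as follows; this is a generic illustration under assumed interfaces (classifiers as callables returning a label and a confidence), not the paper's actual implementation:

```python
def co_train_step(clf_visual, clf_acoustic, unlabeled, threshold=0.9):
    """One co-training round: each cue's classifier pseudo-labels the
    unlabeled samples it is confident about, and those labels become
    new training data for the *other* cue's classifier."""
    new_for_visual, new_for_acoustic = [], []
    for sample in unlabeled:
        label_v, conf_v = clf_visual(sample)    # e.g. appearance-based cue
        label_a, conf_a = clf_acoustic(sample)  # e.g. engine-sound cue
        if conf_v >= threshold:
            new_for_acoustic.append((sample, label_v))
        if conf_a >= threshold:
            new_for_visual.append((sample, label_a))
    return new_for_visual, new_for_acoustic

# Toy usage: the visual cue is confident, the acoustic cue is not,
# so only the acoustic classifier receives new pseudo-labeled data.
to_visual, to_acoustic = co_train_step(
    lambda s: ("car", 0.95),
    lambda s: ("truck", 0.50),
    unlabeled=[1, 2],
)
print(to_acoustic)  # -> [(1, 'car'), (2, 'car')]
print(to_visual)    # -> []
```

Because each cue only transfers labels it is confident about, the two classifiers can improve each other from unlabeled traffic data, which is what removes the manual labeling effort.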

    Artificial Intelligence and Internet of Things for autonomous vehicles

    No full text
    Artificial Intelligence (AI) is a machine intelligence tool providing enormous possibilities for the smart industrial revolution. It facilitates gathering relevant data and information, identifying alternatives, choosing among them, taking actions, making and reviewing decisions, and making smart predictions. The Internet of Things (IoT), in turn, is a cornerstone of the Industry 4.0 revolution, comprising a worldwide infrastructure for collecting and processing data through storage, actuation, sensing, advanced services, and communication technologies. The combination of high-speed, resilient, low-latency connectivity with AI and IoT technologies will enable the transformation towards fully smart Autonomous Vehicles (AVs), illustrating the complementarity between real-world and digital knowledge for Industry 4.0. The purpose of this book chapter is to examine how the latest approaches in AI and IoT can assist in realizing AVs. It has been shown that human errors are the source of 90% of automotive crashes, and that the safest drivers drive ten times better than the average [1]. Automated vehicle safety is therefore significant, and users demand an acceptable risk level that is 1000 times smaller. Some of the notable benefits of AVs are: (1) increased vehicle safety, (2) reduction of accidents, (3) reduction of fuel consumption, (4) freeing of driver time and new business opportunities, (5) new potential market opportunities, and (6) reduced emissions and dust particles. However, AVs must use large-scale data and information from their sensors and devices.